The European Parliament’s position on the Artificial Intelligence Act (AI Act) was adopted this week (June 14) in a plenary session by a large majority. MEPs voted on the first comprehensive set of rules to manage AI risks and promote AI uses in line with EU values.
 
To help build a resilient Europe in the Digital Decade, the EU has developed a trust-based approach to AI, aimed at stimulating research and industrial capabilities while guaranteeing security and fundamental rights.
 
In April 2021, the EC presented its AI package, which included a communication on fostering a European approach to AI, a review of the Coordinated Plan on Artificial Intelligence and a proposal for a regulation establishing harmonized AI rules (AI Act) and the corresponding impact assessment.
 
During a press conference on June 14, EP President Roberta Metsola said, “First, innovation brings us forward and opens up new possibilities, and as legislators, we need to seize the opportunity. It is about change. It is about understanding that we cannot afford to remain stagnant and about not being afraid of the future. Second, going forward, we are going to need constant, clear boundaries and limits to artificial intelligence, and there is one thing that we will not compromise on. Anytime technology advances, it must go hand in hand with our fundamental rights and democratic values. Finally, we must reconsider how we legislate and think in the face of unlimited access to artificial intelligence, because a new age of scrutiny has begun. There are many things that cannot be digitized: emotions, will and judgments.”
 
Metsola then described the days and nights it took, and will still take, to reach “a balanced and human-centered approach to the AI Act,” which she called “the world’s first set of legislation that will no doubt be setting the global standard for the years to come.”
 
Press conference with EP President Roberta Metsola (center) and AI Act rapporteurs Brando Benifei (right) and Dragoş Tudorache (left). (Source: European Parliament)
 
Leading in AI legislation
 
The first ideological battle was to clearly define what AI is, aligning with the OECD definition and putting European companies on an equal footing with others.
 
MEPs have also given a clear mandate for bottom-up standard-setting “to make sure those developing AI have a say in how these technical standards will look, and that helps the agenda of growth and innovation,” MEP and co-rapporteur Dragoş Tudorache said.
 
MEPs also asked the EU to take the lead on AI legislation and work with other like-minded democracies “to make sure that we reach alignment on the global stage on how the signposting in the evolution of technology looks and is aligned with the efforts of all parties,” Tudorache said.
 
Banning unacceptable uses
 
With the AI Act, MEP and co-rapporteur Brando Benifei said Europe is opening a dialogue with the rest of the world on how to build responsible AI. “The institutions have built a system of safeguards that can identify the real risks, not overburden AI when it’s not needed to do unnecessary checks but do very serious conformity assessment when there is a high risk for our safety and fundamental rights.”
 
MEPs’ vote aims to ban AI uses that pose unacceptable risks and run counter to EU values.
 
Benifei pointed to a last-minute difference of opinion within Parliament on maintaining a clear ban on real-time biometric identification. He explained, “There was an attempt to politicize and to transform this into a propaganda tool, but we have won in the Parliament to maintain a clear safeguard to avoid any risk of mass surveillance and, at the same time, maintain the possibility with the non-real-time biometric identification to pursue criminals and any risks that we have in society.”
 
Detecting AI-generated content
 
With the explosive rise of generative AI, MEPs have expanded the text proposed by the Commission and the Council.
 
“We have gone much further on the regulation of generative AI, general-purpose AI and foundation models,” Benifei said. “With generative AI, we want content that is produced by AI to be recognizable as such, and we want deep fakes not to poison our democracy.”
 
Providers of foundation models will have to assess and mitigate any risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before they are placed on the EU market.
 
Generative AI systems based on such models, like ChatGPT, will have to comply with transparency requirements and guarantee safeguards against the generation of illegal content. Detailed summaries of the copyrighted data used in their training will also have to be made publicly available.
 
“We have put some important elements on the table and we need to be clear that no voluntary initiative, no global effort will hinder or influence in a limiting way the work that we are doing to have strong legislation, especially on transparency,” Benifei said.
 
Negotiations with the Council of the EU on the final form of the law began the same day.